Classifier Instability and Partitioning
Abstract
Various methods exist for reducing correlation between classifiers in a multiple classifier framework. The expectation is that the composite classifier will exhibit improved performance and/or be simpler to automate compared with a single classifier. In this paper we investigate how generalisation is affected by varying complexity of unstable base classifiers, implemented as identical single hidden layer MLP networks with fixed parameters. A technique that uses recursive partitioning for selectively perturbing the training set is also introduced, and shown to improve performance and reduce sensitivity to base classifier complexity. Benchmark experiments include artificial and real data with optimal error rates greater than eighteen percent.
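As a rough illustration of the general idea in the abstract, the following is a minimal bagging-style sketch: bootstrap resampling perturbs each ensemble member's training set, and predictions are combined by majority vote. This is not the paper's method; in particular, a simple decision stump stands in for the fixed-architecture single-hidden-layer MLP base classifier, and all names are illustrative.

```python
import random
from collections import Counter

def train_stump(sample):
    # Base learner: decision stump over a single feature. This is an
    # illustrative stand-in for the paper's MLP base classifiers.
    best = None
    for f in range(len(sample[0][0])):
        for x, _ in sample:
            thr = x[f]
            for sign in (1, -1):
                preds = [1 if sign * (xi[f] - thr) >= 0 else 0
                         for xi, _ in sample]
                err = sum(p != y for p, (_, y) in zip(preds, sample))
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    _, f, thr, sign = best
    return lambda x: 1 if sign * (x[f] - thr) >= 0 else 0

def bagged_ensemble(data, n_members=11, seed=0):
    # Each member sees a bootstrap resample: a perturbed training set.
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        boot = [rng.choice(data) for _ in data]
        members.append(train_stump(boot))
    def predict(x):
        # Composite decision: majority vote over the members.
        votes = Counter(m(x) for m in members)
        return votes.most_common(1)[0][0]
    return predict
```

With unstable base learners, the bootstrap perturbation is what decorrelates the members; an odd member count avoids voting ties.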
Similar Papers
Overlapped Partitioning for Ensemble Classifiers of P300-Based Brain-Computer Interfaces
A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI, and their classification performance was assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble lin...
Model-based Recursive Partitioning
Recursive partitioning is embedded into the general and well-established class of parametric models that can be fitted using M-type estimators (including maximum likelihood). An algorithm for model-based recursive partitioning is suggested for which the basic steps are: (1) fit a parametric model to a data set, (2) test for parameter instability over a set of partitioning variables, (3) if ther...
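The three steps listed in this abstract can be sketched as a toy recursion; this is an illustrative simplification, not the actual model-based recursive partitioning algorithm (which uses M-type estimators and formal parameter-instability tests). Here the "parametric model" is just a constant mean, and the instability test is replaced by a simple SSE-reduction threshold.

```python
def fit_mean(ys):
    # Step (1): fit the "parametric model" — here, a constant (mean) model.
    mu = sum(ys) / len(ys)
    sse = sum((y - mu) ** 2 for y in ys)
    return mu, sse

def mob(data, min_size=4, tol=1.0):
    """data: list of (z, y), where z is the partitioning variable.
    Returns a leaf mean, or a nested (split_point, left, right) tree."""
    ys = [y for _, y in data]
    mu, sse = fit_mean(ys)
    if len(data) < 2 * min_size:
        return mu
    # Step (2): crude stand-in for the instability test — find the
    # candidate split of z that most reduces the model's SSE.
    best = None
    zs = sorted(set(z for z, _ in data))
    for cut in zs[1:]:
        left = [y for z, y in data if z < cut]
        right = [y for z, y in data if z >= cut]
        if len(left) < min_size or len(right) < min_size:
            continue
        _, sl = fit_mean(left)
        _, sr = fit_mean(right)
        gain = sse - (sl + sr)
        if best is None or gain > best[0]:
            best = (gain, cut)
    if best is None or best[0] < tol:
        return mu  # parameters look stable: stop splitting
    cut = best[1]
    # Step (3): split at the chosen point and recurse on both halves.
    return (cut,
            mob([(z, y) for z, y in data if z < cut], min_size, tol),
            mob([(z, y) for z, y in data if z >= cut], min_size, tol))
```

Each leaf thus carries its own fitted model, and splits occur only where the fitted parameters appear unstable across the partitioning variable.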
Combined classifier based on feature space partitioning
This paper presents a significant modification to the AdaSS (Adaptive Splitting and Selection) algorithm, which was developed several years ago. The method is based on the simultaneous partitioning of the feature space and an assignment of a compound classifier to each of the subsets. The original version of the algorithm uses a classifier committee and a majority voting rule to arrive at a dec...
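A toy sketch of the general idea behind this kind of feature-space partitioning (not the actual AdaSS algorithm): partition the feature space into regions, assign each region its own classifier, and route each test point to its region's classifier. Here the partition is a simple 1-D k-means and the per-region classifier is just a majority label; both are illustrative stand-ins.

```python
def partition_and_select(data, k=2, iters=10):
    """data: list of (x, label) with scalar feature x.
    Partitions the feature space into k regions and assigns each region
    a classifier (here: the majority label seen in that region)."""
    xs = [x for x, _ in data]
    # Simple 1-D k-means to form the partition of the feature space.
    cents = sorted(xs)[::max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in cents]
        for x in xs:
            i = min(range(len(cents)), key=lambda j: abs(x - cents[j]))
            groups[i].append(x)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    # Region classifiers: majority label among points routed to each region.
    region_votes = [{} for _ in cents]
    for x, y in data:
        i = min(range(len(cents)), key=lambda j: abs(x - cents[j]))
        region_votes[i][y] = region_votes[i].get(y, 0) + 1
    region_cls = [max(v, key=v.get) if v else None for v in region_votes]
    def predict(x):
        # Route the query to its region and use that region's classifier.
        i = min(range(len(cents)), key=lambda j: abs(x - cents[j]))
        return region_cls[i]
    return predict
```

The contrast with the bagging-style approach is that members here specialise by region of the feature space rather than by resampled training sets.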
Maximin affinity learning of image segmentation
Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure ...
A NONPARAMETRIC MULTICLASS PARTITIONING METHOD FOR CLASSIFICATION by SAUL BRIAN GELFAND
c classes are characterized by unknown probability distributions. A data sample containing labelled vectors from each of the c classes is available. The data sample is divided into test and training samples. A classifier is designed based on the training sample and evaluated with the test sample. The classifier is also evaluated based on its asymptotic properties as sample size increases. A mul...